Linear-Model-inspired Neural Network for Electromagnetic Inverse Scattering
Electromagnetic inverse scattering problems (ISPs) aim to retrieve
permittivities of dielectric scatterers from scattering measurements. These
problems are often highly nonlinear, which makes them very difficult to solve.
To alleviate this issue, this letter exploits a linear-model-based network
(LMN) learning strategy, which benefits from both model complexity and data
learning. By introducing a linear model for ISPs, a new model with a
network-driven regularizer is proposed. To attain efficient end-to-end
learning, the network architecture and hyper-parameter estimation are
presented. Experimental results validate its superiority to several
state-of-the-art methods.
Comment: 5 pages, 6 figures, 3 tables
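As a rough illustration of the linearized view of ISPs assumed above (not the paper's actual network architecture), a linear forward model y = A x can be inverted with a regularized least-squares step; all sizes and names below are hypothetical, and the learned network-driven regularizer is stood in for by a simple Tikhonov term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized scattering model: measurements y = A @ x + noise,
# where x holds contrast (permittivity) values and A is the linear
# scattering operator (sizes illustrative).
n_meas, n_pix = 64, 32
A = rng.standard_normal((n_meas, n_pix))
x_true = rng.random(n_pix)
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

def regularized_solve(A, y, lam=0.1):
    """Tikhonov-regularized least squares; in an LMN-style scheme a
    learned, network-driven regularizer would replace the lam * I term."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

x_hat = regularized_solve(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The point of the sketch is only the structure of the solve: a data-fit term plus a regularizer, where the regularizer is the component a network can learn.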
Stage-by-stage Wavelet Optimization Refinement Diffusion Model for Sparse-View CT Reconstruction
Diffusion models have emerged as potential tools to tackle the challenge of
sparse-view CT reconstruction, displaying superior performance compared to
conventional methods. Nevertheless, these prevailing diffusion models
predominantly focus on the sinogram or image domains, which can lead to
instability during model training and convergence to local minima. The wavelet
transform disentangles image contents and features into distinct
frequency-component bands at varying scales, adeptly capturing diverse
directional structures. Employing the wavelet transform as a guiding sparsity
prior significantly enhances the robustness of
diffusion models. In this study, we present an innovative approach named the
Stage-by-stage Wavelet Optimization Refinement Diffusion (SWORD) model for
sparse-view CT reconstruction. Specifically, we establish a unified
mathematical model integrating low-frequency and high-frequency generative
models and solve it with an optimization procedure. Furthermore, we apply the
low-frequency and high-frequency generative models to the wavelet-decomposed
components rather than the sinogram or image domains, ensuring stable model
training. Our method is rooted in established optimization theory and
comprises three distinct stages: low-frequency generation, high-frequency
refinement, and domain transform. Our experimental results demonstrate that
the proposed method outperforms existing state-of-the-art methods both
quantitatively and qualitatively.
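The frequency-band split underlying this approach can be sketched with a single-level 2-D Haar transform; this is a minimal illustration of wavelet decomposition in general, not the SWORD model's actual wavelet choice or stages.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform: splits an image into a
    low-frequency approximation band (LL) and three high-frequency
    detail bands (LH, HL, HH). A SWORD-style scheme would run its
    generative models on such decomposed components."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low band
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal details
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical details
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal details
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2(img)
```

Each band is a quarter-size image, and the even-indexed samples of the original are exactly recoverable as LL + LH + HL + HH, which is what makes the decomposition a lossless change of representation.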
Generative Modeling in Structural-Hankel Domain for Color Image Inpainting
In recent years, some researchers focused on using a single image to obtain a
large number of samples through multi-scale features. This study proposes a
brand-new idea that requires only ten or even fewer samples to construct the
low-rank structural-Hankel-matrices-assisted score-based generative model
(SHGM) for the color image inpainting task. During the prior learning process, a
certain number of internal-middle patches are first extracted from several
images and then the structural-Hankel matrices are constructed from these
patches. To better apply the score-based generative model to learn the internal
statistical distribution within patches, the large-scale Hankel matrices are
finally folded into the higher dimensional tensors for prior learning. During
the iterative inpainting process, SHGM views the inpainting problem as a
conditional generation procedure in a low-rank environment. As a result, the
intermediate restored image is acquired by alternately performing a stochastic
differential equation solver, the alternating direction method of multipliers,
and data-consistency steps. Experimental results demonstrate the remarkable
performance and diversity of SHGM.
Comment: 11 pages, 10 figures
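The structural-Hankel construction can be sketched by stacking sliding windows of a patch as columns; the window and patch sizes below are illustrative, not those of SHGM, but the resulting matrix exhibits the low-rank structure such priors exploit.

```python
import numpy as np

def hankel_from_patch(patch, win):
    """Build a structural Hankel-style matrix from a 2-D patch by
    stacking every win x win sliding window as a column. Smooth or
    structured patches yield a strongly low-rank matrix."""
    h, w = patch.shape
    cols = []
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            cols.append(patch[i:i + win, j:j + win].ravel())
    return np.array(cols).T  # shape: (win * win, num_windows)

# A linear-ramp patch: perfectly structured, so the Hankel matrix
# collapses to rank 2.
patch = np.arange(36, dtype=float).reshape(6, 6)
H = hankel_from_patch(patch, win=3)
```

In an SHGM-style pipeline, such matrices (built from a handful of patches) would then be folded into higher-dimensional tensors before prior learning.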
Low-rank Tensor Assisted K-space Generative Model for Parallel Imaging Reconstruction
Although recent deep learning methods, especially generative models, have
shown good performance in fast magnetic resonance imaging, there is still much
room for improvement in high-dimensional generation. Considering that internal
dimensions in score-based generative models have a critical impact on
estimating the gradient of the data distribution, we present a new idea,
low-rank tensor assisted k-space generative model (LR-KGM), for parallel
imaging reconstruction. This means that we transform original prior information
into high-dimensional prior information for learning. More specifically, the
multi-channel data are constructed into a large Hankel matrix, which is
subsequently folded into a tensor for prior learning. In the testing phase, a
low-rank rotation strategy is utilized to impose low-rank constraints on the
tensor output of the generative network. Furthermore, we alternate between
traditional generative iterations and low-rank high-dimensional tensor
iterations for reconstruction. Experimental comparisons with state-of-the-art
methods demonstrate that the proposed LR-KGM method achieves better
performance.
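The fold-then-constrain step can be sketched as reshaping a Hankel matrix into a tensor and imposing rank via truncated SVD on an unfolding; this is a generic stand-in for LR-KGM's low-rank rotation strategy, with all sizes hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a large multi-channel k-space Hankel matrix,
# folded into a higher-dimensional tensor for prior learning.
H = rng.standard_normal((64, 48))
T = H.reshape(8, 8, 48)  # "folding" the matrix into a tensor

def lowrank_constraint(T, rank):
    """Impose a low-rank constraint via truncated SVD on the tensor's
    matrix unfolding; a generic proxy for the low-rank step that LR-KGM
    alternates with generative iterations."""
    M = T.reshape(T.shape[0] * T.shape[1], T.shape[2])
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0            # keep only the leading singular values
    return (U * s) @ Vt       # best rank-`rank` approximation

M_lr = lowrank_constraint(T, rank=5)
```

In the actual reconstruction loop, a step like this would alternate with generative (score-based) iterations and data-consistency updates.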
Correlated and Multi-frequency Diffusion Modeling for Highly Under-sampled MRI Reconstruction
Most existing MRI reconstruction methods perform targeted reconstruction of
the entire MR image without taking specific tissue regions into consideration.
This may fail to emphasize reconstruction accuracy on the tissues that are
important for diagnosis. In this study, leveraging a combination of the
properties of k-space data and the diffusion process, our novel scheme focuses
on mining the multi-frequency prior with different strategies to preserve fine
texture details in the reconstructed image. In addition, a diffusion process
can converge more quickly if its target distribution closely resembles the
noise distribution in the process. This can be accomplished through various
high-frequency prior extractors. This finding further solidifies the
effectiveness of the score-based generative model. On top of all these
advantages, our method improves the accuracy of MRI reconstruction and
accelerates the sampling process. Experimental results verify that the
proposed method obtains more accurate reconstructions and outperforms
state-of-the-art methods.
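A minimal example of a high-frequency prior extractor of the kind mentioned above is masking out the low-frequency core of k-space; this is a generic sketch, not the paper's specific extractors, and the image and radius are hypothetical.

```python
import numpy as np

def highfreq_prior(image, radius):
    """Extract the high-frequency component of an image by zeroing the
    central (low-frequency) disc of its centered k-space and inverting
    the transform."""
    k = np.fft.fftshift(np.fft.fft2(image))       # centered k-space
    H, W = image.shape
    yy, xx = np.mgrid[:H, :W]
    center = ((yy - H // 2) ** 2 + (xx - W // 2) ** 2) <= radius ** 2
    k_high = np.where(center, 0.0, k)             # drop low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(k_high)))

# A constant image has all its energy at DC, so its high-frequency
# component is (numerically) zero.
img = np.ones((16, 16))
hf = highfreq_prior(img, radius=2)
```

A target distribution built from such a residual is closer to noise than the full image is, which is the intuition behind the faster convergence claimed above.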